virtual data
Local Bayesian Optimization for Controller Tuning with Crash Constraints

von Rohr, Alexander, Stenger, David, Scheurenberg, Dominik, Trimpe, Sebastian

arXiv.org Artificial Intelligence

Controller tuning is crucial for closed-loop performance but often involves manual adjustments. Although Bayesian optimization (BO) has been established as a data-efficient method for automated tuning, applying it to large and high-dimensional search spaces remains challenging. We extend a recently proposed local variant of BO to include crash constraints, where the controller can only be successfully evaluated in an a-priori unknown feasible region. We demonstrate the efficiency of the proposed method through simulations and hardware experiments. Our findings showcase the potential of local BO to enhance controller performance and reduce the time and resources necessary for tuning.
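The central difficulty described above — the controller can only be evaluated inside an a-priori unknown feasible region, and evaluations outside it "crash" — can be illustrated with a toy crash-constrained local search loop. This is a heavily simplified stand-in for the authors' local BO method; the objective, crash region, and trust-region rule below are all invented for illustration:

```python
import numpy as np

def toy_objective(x):
    """Hypothetical closed-loop cost. Returns None to signal a 'crash'
    (the controller could not be evaluated) outside the feasible region,
    which the tuner does not know in advance."""
    if np.linalg.norm(x) > 2.0:           # assumed crash boundary
        return None
    return float(np.sum((x - 0.5) ** 2))  # cost to minimize

def local_search_with_crashes(x0, n_iters=200, radius=0.3, seed=0):
    """Sample candidates in a small trust region around the incumbent;
    crashed evaluations are recorded as infeasible and never replace
    the incumbent, mimicking how crash constraints restrict tuning."""
    rng = np.random.default_rng(seed)
    best_x = np.asarray(x0, dtype=float)
    best_y = toy_objective(best_x)
    assert best_y is not None, "starting point must be feasible"
    crashes = 0
    for _ in range(n_iters):
        cand = best_x + rng.uniform(-radius, radius, size=best_x.shape)
        y = toy_objective(cand)
        if y is None:
            crashes += 1        # infeasible candidate: keep the incumbent
            continue
        if y < best_y:
            best_x, best_y = cand, y
    return best_x, best_y, crashes

best_x, best_y, crashes = local_search_with_crashes([1.5, 1.0])
```

A real local BO tuner would replace the uniform candidate sampling with a Gaussian-process surrogate and an acquisition function, and would typically also model feasibility from the observed crashes.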


Rapid Gyroscope Calibration: A Deep Learning Approach

Stolero, Yair, Klein, Itzik

arXiv.org Artificial Intelligence

Low-cost gyroscope calibration is essential for ensuring the accuracy and reliability of gyroscope measurements. Stationary calibration estimates the deterministic parts of measurement errors. To this end, a common practice is to average the gyroscope readings during a predefined period and estimate the gyroscope bias. Calibration duration plays a crucial role in performance; therefore, longer periods are preferred. However, some applications require quick startup times, so calibration is allowed only for a short time. In this work, we focus on reducing low-cost gyroscope calibration time using deep learning methods. We propose a deep-learning framework and explore the possibilities of using multiple real and virtual gyroscopes to improve the calibration performance of single gyroscopes. To train and validate our approach, we recorded a dataset consisting of 169 hours of gyroscope readings, using 24 gyroscopes of two different brands. We also created a virtual dataset consisting of simulated gyroscope readings. The two datasets were used to evaluate our proposed approach. One of our key achievements in this work is reducing gyroscope calibration time by up to 89% using three low-cost gyroscopes.
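The stationary averaging baseline described above — with the sensor at rest the true angular rate is zero, so the mean of the readings estimates the deterministic bias — can be sketched as follows. The bias, noise level, and sample rate are made-up illustration values; the paper's deep-learning model replaces this averaging step to shorten the required calibration window:

```python
import numpy as np

def estimate_gyro_bias(readings):
    """Stationary calibration baseline: with the gyroscope at rest the
    true angular rate is zero, so the per-axis mean of the stationary
    readings estimates the deterministic bias."""
    return np.mean(readings, axis=0)

# Simulate 60 s of stationary readings at 100 Hz with a known bias
# (assumed values, roughly in the range of low-cost MEMS gyroscopes).
rng = np.random.default_rng(42)
true_bias = np.array([0.02, -0.01, 0.005])   # rad/s, per axis
noise_std = 0.05                             # rad/s white noise
readings = true_bias + rng.normal(0.0, noise_std, size=(6000, 3))

bias_hat = estimate_gyro_bias(readings)
calibrated = readings - bias_hat             # bias-corrected output
```

Because the estimation error of the mean shrinks like 1/sqrt(N), halving the calibration window roughly multiplies the bias error by sqrt(2) — which is why shortening calibration without losing accuracy, as the paper proposes, requires a learned model rather than a shorter average.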


Federated Virtual Learning on Heterogeneous Data with Local-global Distillation

Huang, Chun-Yin, Jin, Ruinan, Zhao, Can, Xu, Daguang, Li, Xiaoxiao

arXiv.org Artificial Intelligence

Despite the popularity of Federated Learning (FL) for training machine learning models in a distributed manner, it is susceptible to performance drops when training on heterogeneous data. In addition, FL inevitably faces the challenges of synchronization, efficiency, and privacy. Recently, dataset distillation has been explored to improve the efficiency and scalability of FL by creating a smaller, synthetic dataset that retains the performance of a model trained on the local private datasets. We discover that using distilled local datasets can amplify the heterogeneity issue in FL. To address this, we propose a new method, called Federated Virtual Learning on Heterogeneous Data with Local-Global Distillation (FedLGD), which trains FL using a smaller synthetic dataset (referred to as virtual data) created through a combination of local and global dataset distillation. Specifically, to handle synchronization and class imbalance, we propose iterative distribution matching so that clients hold the same amount of balanced local virtual data; to harmonize domain shifts, we use federated gradient matching to distill global virtual data, which is shared with clients without compromising data privacy and rectifies heterogeneous local training by enforcing local-global feature similarity. We experiment on both benchmark and real-world datasets containing heterogeneous data from different sources, and further scale up to an FL scenario with a large number of clients holding heterogeneous, class-imbalanced data. Our method outperforms state-of-the-art heterogeneous FL algorithms under various settings with a very limited amount of distilled virtual data.
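Distribution matching in its simplest moment-matching form can be sketched as follows: per class, synthetic (virtual) samples are optimized by gradient descent so that their mean matches the mean of the real local data, and every class receives the same number of virtual samples — yielding the balanced local virtual datasets the abstract describes. This toy version matches raw first moments; FedLGD matches feature distributions through a network, and the sample counts and data below are assumptions:

```python
import numpy as np

def distill_by_moment_matching(real_x, real_y, n_per_class=5,
                               steps=300, lr=0.5, seed=0):
    """Toy distribution matching: per class, move the synthetic samples so
    their mean matches the real class mean, by gradient descent on
    ||mean(syn) - mean(real)||^2. Every class gets n_per_class virtual
    samples, so the distilled set is class-balanced by construction."""
    rng = np.random.default_rng(seed)
    virtual_x, virtual_y = [], []
    for c in np.unique(real_y):
        xc = real_x[real_y == c]
        syn = rng.normal(0.0, 1.0, size=(n_per_class, real_x.shape[1]))
        target = xc.mean(axis=0)
        for _ in range(steps):
            # d/d(syn_i) ||mean(syn) - target||^2 = 2 (mean(syn) - target) / n
            grad = 2.0 * (syn.mean(axis=0) - target) / n_per_class
            syn -= lr * grad
        virtual_x.append(syn)
        virtual_y.append(np.full(n_per_class, c))
    return np.vstack(virtual_x), np.concatenate(virtual_y)

# Imbalanced toy client data: 100 samples of class 0, 10 of class 1.
rng = np.random.default_rng(1)
real_x = np.vstack([rng.normal(2.0, 1.0, (100, 4)),
                    rng.normal(-3.0, 1.0, (10, 4))])
real_y = np.array([0] * 100 + [1] * 10)
vx, vy = distill_by_moment_matching(real_x, real_y)
```

Even though the real data is 10:1 imbalanced, the virtual set contains five samples per class, which is the synchronization- and imbalance-handling property the abstract attributes to iterative distribution matching.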


Verge.io unveils shared, virtualized GPU computing

#artificialintelligence

Verge.io, the company with a simpler way to virtualize data centers, has added new features to its Verge-OS software to give users the performance of GPUs as virtualized, shared resources. This creates a cost-effective, simple, and flexible way to perform GPU-based machine learning, remote desktop, and other compute-intensive workloads within an agile, scalable, secure Verge-OS virtual data center. Verge-OS abstracts compute, network, and storage from commodity servers and creates pools of raw resources that are simple to run and manage. This enables feature-rich infrastructures for environments and workloads such as clustered HPC in universities; ultra-converged and hyper-converged enterprises; DevOps and test/dev; compliant medical and healthcare; remote and edge compute, including VDI; and xSPs offering hosted services, including private clouds. Current methods for deploying GPUs systemwide are complex and expensive, especially for remote users. Users and administrators can pass through an installed GPU to a virtual data center by simply creating a virtual machine with access to that GPU and its resources.


EEMC: Embedding Enhanced Multi-tag Classification

Li, Yanlin, An, Shi, Zhang, Ruisheng

arXiv.org Machine Learning

Representation learning has recently achieved impressive performance in NLP and complex networks and is becoming a fundamental technology in machine learning and data mining. How to use representation learning to improve classifier performance is a significant research direction. We use representation learning to map raw data (nodes of a graph) to a low-dimensional feature space. In this space, each raw data point obtains a low-dimensional vector representation; we apply simple linear operations to those vectors to produce virtual data, and use both the original vectors and the virtual data to train a multi-tag classifier. We then measure classifier performance by F1 score (Macro-F1 and Micro-F1). Our method raises Macro-F1 by 28%-450% and average F1 by 12%-224%. As a baseline, we also train the classifier directly on the low-dimensional vectors and measure its performance. We validate our algorithm on three public datasets and find that the virtual data greatly improves the classifier's F1 score; our algorithm is therefore an effective way to improve classifier performance. These results suggest that virtual data generated by simple linear operations in the representation space still retains the information of the raw data, which is also significant for learning from small datasets.
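The "simple linear operations" in the representation space can be illustrated with a mixup-style interpolation: a convex combination of two embedded nodes yields a virtual vector, here labeled with the union of their tags. This is a hedged sketch — the abstract does not specify the exact linear operations or the label rule, so both are assumptions:

```python
import numpy as np

def make_virtual_data(embeddings, tag_matrix, n_virtual=100, seed=0):
    """Generate virtual samples in the representation space: each virtual
    vector is a convex combination of two real embeddings, and its
    multi-tag label is the union (logical OR) of their binary tag rows.
    Both the interpolation and the label rule are illustrative choices."""
    rng = np.random.default_rng(seed)
    n = embeddings.shape[0]
    i = rng.integers(0, n, size=n_virtual)
    j = rng.integers(0, n, size=n_virtual)
    lam = rng.uniform(0.2, 0.8, size=(n_virtual, 1))   # mixing coefficients
    virtual_x = lam * embeddings[i] + (1.0 - lam) * embeddings[j]
    virtual_tags = np.logical_or(tag_matrix[i], tag_matrix[j]).astype(int)
    return virtual_x, virtual_tags

# Toy node embeddings (e.g. from DeepWalk/node2vec) and a binary tag matrix.
rng = np.random.default_rng(7)
emb = rng.normal(size=(50, 16))                 # 50 nodes, 16-dim embeddings
tags = (rng.random((50, 5)) < 0.3).astype(int)  # 5 possible tags per node
vx, vt = make_virtual_data(emb, tags)
# A multi-tag classifier would then be trained on np.vstack([emb, vx])
# with the corresponding stacked tag matrix.
```

Because each virtual vector lies on a segment between two real embeddings, it stays inside the region of the representation space that the raw data occupies — consistent with the abstract's observation that virtual data retains the information of the raw data.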